10th World Congress in Probability and Statistics

Invited Session (live Q&A at Track 3, 10:30 PM KST)

Invited 11

Analysis of Dependent Data (Organizer: Chae Young Lim)

Conference: 10:30 PM — 11:00 PM KST
Local: Jul 22 Thu, 9:30 AM — 10:00 AM EDT

Statistical learning with spatially dependent high-dimensional data

Taps Maiti (Michigan State University)

The rapid development of information technology makes it possible to collect massive amounts of high-dimensional, multimodal data in diverse fields of science and engineering. New statistical and machine learning methods are being developed continuously to solve the challenging problems arising from these complex systems. This talk will discuss a specific type of statistical learning, namely feature selection and classification, when the features are multidimensional; more specifically, they are spatio-temporal in nature. Various machine learning techniques are suitable for this problem, although their underlying statistical theories are not well established. We start with linear discriminant analysis under spatial dependence, establish its statistical properties, and then connect it to other machine learning tools for flexible data analysis in the context of brain imaging data.
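For orientation, here is the classical linear discriminant rule that the talk takes as its starting point (a standard textbook form; the spatial-dependence extension discussed in the talk concerns how the covariance $\Sigma$ is modeled and estimated):
\[
\delta(x) \;=\; \Big(x - \tfrac{\mu_0+\mu_1}{2}\Big)^{\top} \Sigma^{-1} (\mu_1 - \mu_0) \;+\; \log\frac{\pi_1}{\pi_0},
\]
and $x$ is assigned to class 1 when $\delta(x) > 0$. Here $\mu_0, \mu_1$ are the class means, $\Sigma$ is the common covariance matrix (structured by a spatio-temporal covariance model in the setting of the talk), and $\pi_0, \pi_1$ are the prior class probabilities.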

Large-scale spatial data science with ExaGeoStat

Marc Genton (King Abdullah University of Science and Technology (KAUST))

Spatial data science aims at analyzing the spatial distributions, patterns, and relationships of data over a predefined geographical region. For decades, the size of most spatial datasets was modest enough to be handled by exact inference. Nowadays, with the explosive increase in data volumes, High-Performance Computing (HPC) can serve as a tool to handle massive datasets for many spatial applications. Big data processing becomes feasible with the availability of parallel hardware systems such as shared- and distributed-memory multiprocessors and GPU accelerators. In spatial statistics, parallel and distributed computing can alleviate the computational and memory restrictions in large-scale Gaussian process inference and prediction. In this talk, we will describe cutting-edge HPC techniques and their applications in solving large-scale spatial problems with the new software ExaGeoStat.
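To make the computational bottleneck concrete, here is a minimal NumPy/SciPy sketch of exact Gaussian log-likelihood evaluation for a zero-mean field with an exponential covariance. The dense Cholesky factorization below is the $O(n^3)$ kernel that HPC frameworks such as ExaGeoStat parallelize; the function names and covariance choice are illustrative and are not ExaGeoStat's API.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.spatial.distance import cdist

def exponential_cov(locs, variance, range_):
    """Exponential covariance matrix for 2-D locations (n x 2 array)."""
    d = cdist(locs, locs)
    return variance * np.exp(-d / range_)

def gaussian_loglik(z, locs, variance, range_):
    """Exact zero-mean Gaussian log-likelihood.

    The Cholesky factorization costs O(n^3) flops and O(n^2) memory,
    which is the bottleneck for large n that parallel/distributed
    implementations target.
    """
    n = z.size
    K = exponential_cov(locs, variance, range_)
    c, lower = cho_factor(K, lower=True)
    alpha = cho_solve((c, lower), z)           # K^{-1} z
    logdet = 2.0 * np.sum(np.log(np.diag(c)))  # log |K|
    return -0.5 * (z @ alpha + logdet + n * np.log(2.0 * np.pi))

# Toy usage: 500 random locations on the unit square.
rng = np.random.default_rng(0)
locs = rng.uniform(size=(500, 2))
z = rng.standard_normal(500)
print(gaussian_loglik(z, locs, variance=1.0, range_=0.1))
```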

Multivariate spatio-temporal Hawkes process models of terrorism

Mikyoung Jun (University of Houston)

We develop a flexible bivariate spatio-temporal Hawkes process model to analyze patterns of terrorism. Previous applications of point process methods to political violence data have mainly utilized temporal Hawkes process models, neglecting spatial variation in these attack patterns. This limits what can be learned from these models, as any effective counter-terrorism strategy requires knowledge of both when and where attacks are likely to occur. The existing work on spatio-temporal Hawkes processes imposes restrictions on the triggering function that are not well suited for terrorism data. Therefore, we generalize the structure of the spatio-temporal triggering function considerably, allowing for nonseparability, nonstationarity, and cross-triggering (i.e., triggering across groups). To demonstrate the utility of our model, we analyze two samples of real-world terrorism data: Afghanistan (2002-2013) and Nigeria (2010-2017). Jointly, these two studies demonstrate that our model dramatically outperforms standard Hawkes process models, besting widely used alternatives in overall model fit and revealing spatio-temporal patterns that are, by construction, masked in these models (e.g., increasing dispersion in cross-triggering over time).
This is joint work with Scott Cook.
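For orientation, a schematic form of a bivariate spatio-temporal Hawkes intensity with cross-triggering (the exact parameterization in the talk may differ):
\[
\lambda_k(s,t) \;=\; \mu_k(s) \;+\; \sum_{j=1}^{2} \;\sum_{i:\, t_i < t,\ \text{event } i \text{ in group } j} g_{jk}\big(s - s_i,\, t - t_i\big), \qquad k = 1,2,
\]
where $g_{jk}$ with $j \neq k$ captures cross-triggering between groups. A separable, stationary specification would force $g_{jk}(u,\tau) = \alpha_{jk}\, f(u)\, h(\tau)$; the model described above relaxes precisely this kind of restriction.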

Q&A for Invited Session 11

This talk does not have an abstract.

Session Chair

Chae Young Lim (Seoul National University)

Invited 19

Randomized Algorithms (Organizer: Devdatt Dubhashi)

Conference: 10:30 PM — 11:00 PM KST
Local: Jul 22 Thu, 9:30 AM — 10:00 AM EDT

Is your distribution in shape?

Ronitt Rubinfeld (Massachusetts Institute of Technology)

Algorithms for understanding data generated from distributions over large discrete domains are of fundamental importance. We consider the sample complexity of "property testing algorithms" that seek to distinguish whether or not an underlying distribution satisfies basic shape properties. Examples of such properties include convexity, log-concavity, heavy tails, and approximability by $k$-histogram functions. In this talk, we will focus on the property of *monotonicity*, as tools developed for testing the monotonicity property have proven to be useful for all of the above properties as well as several others. We say a distribution $p$ is monotone if for any two comparable elements $x < y$ in the domain, we have that $p(x) \le p(y)$. For example, for the classic $n$-dimensional hypercube domain, in which domain elements are described via $n$ different features, monotonicity implies that for every element, an increase in the value of one of the features can only increase its probability.

We recount the development over the past nearly two decades of *monotonicity testing* algorithms for distributions over various discrete domains, which make no a priori assumptions on the underlying distribution. We study the sample complexity of testing whether a distribution is monotone as a function of the size of the domain, which can vary dramatically depending on the structure of the underlying domain. Not surprisingly, the sample complexity over high-dimensional domains can be much greater than over low-dimensional domains of the same size. Nevertheless, for many important domain structures, including high-dimensional domains, the sample complexity is sublinear in the size of the domain. In contrast, when no a priori assumptions are made about the distribution, learning the distribution requires sample complexity that is linear in the size of the domain. The techniques used draw tools from a wide spectrum of areas, including statistics, optimization, combinatorics, and computational complexity theory.
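In the standard property-testing formulation (stated here only for concreteness), the task is: given i.i.d. samples from an unknown distribution $p$ over a partially ordered domain $D$ and a proximity parameter $\varepsilon > 0$, output, with probability at least $2/3$,
\[
\text{ACCEPT if } p \text{ is monotone}, \qquad
\text{REJECT if } d_{\mathrm{TV}}(p,q) > \varepsilon \text{ for every monotone distribution } q \text{ on } D.
\]
The sample complexity is measured as a function of $|D|$ (and $\varepsilon$); by contrast, learning an arbitrary $p$ to total-variation error $\varepsilon$ requires on the order of $|D|/\varepsilon^2$ samples.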

Beyond independent rounding: strongly Rayleigh distributions and traveling salesperson problem

Shayan Oveis Gharan (University of Washington)

Strongly Rayleigh (SR) distributions are a family of probability distributions that generalize product distributions and satisfy the strongest forms of negative dependence. Over the last decade, these distributions have found numerous applications in algorithm design. In this talk I will survey several fundamental properties of SR distributions and some of their applications in going beyond the limits of the independent rounding method for designing approximation algorithms for combinatorial optimization problems.
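For reference, the defining property (standard in this literature): a probability measure $\mu$ on subsets of $\{1,\dots,n\}$ is strongly Rayleigh if its generating polynomial
\[
g_\mu(z_1,\dots,z_n) \;=\; \sum_{S \subseteq \{1,\dots,n\}} \mu(S) \prod_{i \in S} z_i
\]
is real stable, i.e. $g_\mu$ has no root $z$ with $\operatorname{Im}(z_i) > 0$ for all $i$. Product measures, determinantal measures, and (weighted) uniform spanning-tree distributions are strongly Rayleigh, which is what makes the family useful for rounding fractional points such as those in the spanning-tree polytope.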

A survey of dependent randomized rounding

Aravind Srinivasan (University of Maryland, College Park)

In dependent randomized rounding, we take a point x inside a given high-dimensional object such as a polytope and “round” it probabilistically to a suitable point y, such as one with integer coordinates. The “dependence” arises from the fact that the dimensions of x are rounded in a carefully correlated manner, so that some properties hold with probability one or with high probability, while others hold in expectation.  We will discuss this methodology and some applications in combinatorial optimization and concentration of measure.
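A minimal sketch of this idea, assuming the simplest setting of a fractional vector in $[0,1]^n$ whose coordinate sum should be preserved exactly while each marginal is preserved in expectation. This is a generic pipage/dependent-rounding step, not any specific published variant, and the function name is illustrative.

```python
import random

def dependent_round(x, eps=1e-12):
    """Sum-preserving dependent rounding of a fractional vector x in [0,1]^n.

    Each step couples two fractional coordinates so that their sum is kept
    exactly while each marginal is unchanged in expectation; after the step
    at least one of the two coordinates becomes integral.
    """
    x = list(x)
    while True:
        frac = [i for i, v in enumerate(x) if eps < v < 1 - eps]
        if len(frac) < 2:
            break
        i, j = frac[0], frac[1]
        alpha = min(1 - x[i], x[j])   # maximal shift of mass from j to i
        beta = min(x[i], 1 - x[j])    # maximal shift of mass from i to j
        if random.random() < beta / (alpha + beta):
            x[i] += alpha
            x[j] -= alpha
        else:
            x[i] -= beta
            x[j] += beta
    # Clean up floating-point noise on the (now integral) coordinates.
    return [round(v) if not (eps < v < 1 - eps) else v for v in x]

# E.g. rounding (0.5, 0.5, 0.5, 0.5): the output always has exactly two 1s,
# so the sum 2 holds with probability one while each marginal stays 1/2.
print(dependent_round([0.5, 0.5, 0.5, 0.5]))
```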

Q&A for Invited Session 19

This talk does not have an abstract.

Session Chair

Devdatt Dubhashi (Chalmers University)

Invited 23

Stochastic Partial Differential Equations (Organizer: Leonid Mytnik)

Conference: 10:30 PM — 11:00 PM KST
Local: Jul 22 Thu, 9:30 AM — 10:00 AM EDT

Phase analysis for a family of stochastic reaction-diffusion equations

Carl Mueller (University of Rochester)

We consider a reaction-diffusion equation, that is, a one-dimensional heat equation with a drift function $V(u)$ and two-parameter white noise with coefficient $\lambda\sigma(u)$, subject to a “nice” initial value and periodic boundary conditions, where $x$ lies in $\mathbb{T}=[-1,1]$. The reaction term $V(u)$ belongs to a large family of functions that includes Fisher-KPP nonlinearities $V(u)=u(1-u)$ as well as Allen-Cahn potentials $V(u)=u(1-u)(1+u)$; the multiplicative nonlinearity $\sigma$ is non-random and Lipschitz continuous, and $\lambda>0$ is a non-random number that measures the strength of the effect of the noise $W$. Our principal finding is that: (i) when $\lambda$ is sufficiently large, the above equation has a unique invariant measure; and (ii) when $\lambda$ is sufficiently small, the collection of all invariant measures is a non-trivial line segment, in particular infinite. This proves an earlier prediction of Zimmerman. Our methods also give information about the structure of these invariant measures.
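In display form, the equation described above reads
\[
\partial_t u(t,x) \;=\; \partial_x^2 u(t,x) \;+\; V\big(u(t,x)\big) \;+\; \lambda\,\sigma\big(u(t,x)\big)\,\dot{W}(t,x),
\qquad x \in \mathbb{T}=[-1,1],
\]
with periodic boundary conditions and $\dot{W}$ a two-parameter (space-time) white noise.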

Regularization by noise for SPDEs and SDEs: a stochastic sewing approach

Oleg Butkovsky (Weierstrass Institute)


Stochastic quantization, large N, and mean field limit

Hao Shen (University of Wisconsin-Madison)

“Large N problems” in quantum field theory refer to the study of models with N field components for N large. We study these problems using SPDE methods via stochastic quantization; in the SPDE setting they are formulated as mean field problems. In particular we will consider the vector Phi^4 model (i.e., the linear sigma model), whose stochastic quantization is a system of N coupled dynamical Phi^4 SPDEs. We discuss a series of results. First, in 2D, we prove the mean field limit for these dynamics as N goes to infinity. We also show that the quantum field theory converges to a massive Gaussian free field in this limit, in both 2D and 3D. Moreover, we prove exact formulae for some correlations of O(N)-invariant observables in the large N limit.

(Joint work with Scott Smith, Rongchan Zhu and Xiangchan Zhu.)
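Schematically (suppressing renormalization constants and the precise choice of couplings), the system of N coupled dynamical Phi^4 equations referred to above has the form
\[
\partial_t \Phi_i \;=\; \Delta \Phi_i \;-\; m^2 \Phi_i \;-\; \frac{1}{N}\Big(\sum_{j=1}^{N} \Phi_j^2\Big)\Phi_i \;+\; \xi_i, \qquad i = 1,\dots,N,
\]
with $\xi_1,\dots,\xi_N$ independent space-time white noises; the $1/N$ scaling of the interaction is what produces a mean field limit as $N \to \infty$.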

Q&A for Invited Session 23

This talk does not have an abstract.

Session Chair

Leonid Mytnik (Israel Institute of Technology)

Invited 26

Pathwise Stochastic Analysis (Organizer: Hendrik Weber)

Conference: 10:30 PM — 11:00 PM KST
Local: Jul 22 Thu, 9:30 AM — 10:00 AM EDT

Sig-Wasserstein Generative models to generate realistic synthetic time series

Hao Ni (University College London)

Wasserstein generative adversarial networks (WGANs) have been very successful in generating samples from seemingly high-dimensional probability measures. However, these methods struggle to capture the temporal dependence of joint probability distributions induced by time-series data. Moreover, training WGANs is computationally expensive due to the min-max formulation of the loss function. To overcome these challenges, we integrate Wasserstein GANs with a mathematically principled and efficient path feature extraction called the signature of a path. The signature of a path is a graded sequence of statistics that provides a universal and principled description of a stream of data, and its expected value characterises the law of the time-series model. In particular, we develop a new metric, (conditional) Sig-W1, that captures the (conditional) joint law of time-series models, and use it as a discriminator. The signature feature space enables an explicit representation of the proposed discriminators, which alleviates the need for expensive training. We validate our method on both synthetic and empirical datasets, and it achieves superior performance compared with other state-of-the-art benchmark methods.

This is joint work with Lukasz Szpruch (University of Edinburgh), Magnus Wiese (University of Kaiserslautern), Shujian Liao (UCL), and Baoren Xiao (UCL).
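For reference, the signature of a path $X:[0,T]\to\mathbb{R}^d$ is the graded sequence of iterated integrals
\[
S(X) \;=\; \Big(1,\; \int_{0<t<T} dX_t,\; \int_{0<t_1<t_2<T} dX_{t_1}\otimes dX_{t_2},\; \dots\Big),
\]
and, roughly speaking (the exact truncation and normalization in the talk may differ), the Sig-$W_1$ discriminator compares laws $\mu$ and $\nu$ of paths through their expected truncated signatures,
\[
\mathrm{Sig}\text{-}W_1(\mu,\nu) \;\approx\; \big\| \mathbb{E}_{X\sim\mu}\big[S^{\le M}(X)\big] - \mathbb{E}_{Y\sim\nu}\big[S^{\le M}(Y)\big] \big\|,
\]
which is linear in the unknown expectations and hence cheap to evaluate compared with a trained discriminator network.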

State space for the 3D stochastic quantisation equation of Yang-Mills

Ilya Chevyrev (University of Edinburgh)

In this talk I will present a proposed state space of distributions for the stochastic Yang-Mills equations (SYM) in 3D. I will show how the notion of gauge equivalence extends to this space and how one can construct a Markov process on the space of gauge orbits associated with the SYM. This partly extends a recent construction done in the less singular 2D setting.

Based on a joint work in progress with Ajay Chandra, Martin Hairer, and Hao Shen.
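As a reminder of the object in question (written only schematically; the precise formulation, gauge fixing, and renormalization are part of the construction discussed in the talk), the stochastic Yang-Mills dynamic for a connection $A$ reads
\[
\partial_t A \;=\; -\,\mathrm{d}_A^{*} F_A \;+\; \xi,
\]
where $F_A$ is the curvature of $A$, $\mathrm{d}_A^{*}$ is the adjoint of the covariant exterior derivative, and $\xi$ is a Lie-algebra-valued space-time white noise.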

A priori bounds for quasi-linear parabolic equations in the full sub-critical regime

Scott Smith (Chinese Academy of Sciences)

We will discuss quasi-linear parabolic equations driven by an additive forcing, in the full sub-critical regime. Our arguments are inspired by Hairer’s theory of regularity structures; however, we work with a more parsimonious model indexed by multi-indices rather than trees. Assuming bounds on this model, which is modified in agreement with the concept of algebraic renormalization, we prove local a priori estimates on solutions to the quasi-linear equations modified by the corresponding counterterms.
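A model example of the class of equations meant here (my reading of “quasi-linear parabolic with additive forcing”; the setting of the talk may be more general) is
\[
\partial_t u \;-\; a(u)\,\Delta u \;=\; \xi,
\]
with $a$ smooth and uniformly elliptic and $\xi$ a rough but sub-critical forcing; the counterterms mentioned above enter as $u$-dependent corrections on the right-hand side.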

Q&A for Invited Session 26

This talk does not have an abstract.

Session Chair

Hendrik Weber (University of Bath)
